

Search for: All records where Creators/Authors contains: "Fiorella, Logan"


  1. Abstract

    Prior research suggests most students do not glean valid cues from provided visuals, resulting in reduced metacomprehension accuracy. Across four experiments, we explored how the presence of instructional visuals affects students’ metacomprehension accuracy and cue use for different types of metacognitive judgments. Undergraduates read texts on biology (Studies 1a and 1b) or chemistry (Studies 2 and 3) topics, made various judgments (test, explain, and draw) for each text, and completed comprehension tests. Students were randomly assigned to receive only texts (text-only condition) or texts with instructional visualizations (text-and-image condition). In Studies 1b, 2, and 3, students also reported the cues they used to make each judgment. Across the set of studies, instructional visualizations harmed relative metacomprehension accuracy. In Studies 1a and 2, this was especially the case when students were asked to judge how well they felt they could draw the processes described in the text; in Study 3, it was especially the case when students were asked to judge how well they would do on a set of comprehension tests. In Studies 2 and 3, students who reported basing their judgments on representation-based cues demonstrated greater relative accuracy than students who reported using heuristic-based cues. Further, across these studies, students reported using visual cues to make their draw judgments, but not their test or explain judgments. Taken together, these results indicate that instructional visualizations can hinder metacognitive judgment accuracy, particularly by influencing the types of cues students use to make judgments of their ability to draw key concepts.

     
  2. Abstract

    How do learners make sense of what they are learning? In this article, I present a new framework of sense-making based on research investigating the benefits and boundaries of generative learning activities (GLAs). The generative sense-making framework distinguishes among three primary sense-making modes—explaining, visualizing, and enacting—that each serve unique and complementary cognitive functions. Specifically, the framework assumes learners mentally organize and simulate the learning material (via the visualizing and enacting modes) to facilitate their ability to generalize the learning material (via the explaining mode). I present evidence from research on GLAs illustrating how visualizations and enactments (instructor-provided and/or learner-generated) can facilitate higher quality learner explanations and subsequent learning outcomes. I also discuss several barriers to sense-making that help explain when GLAs are not effective and describe possible ways to overcome these barriers by appropriately guiding and timing GLAs. Finally, I discuss implications of the generative sense-making framework for theory and practice and provide recommendations for future research.

     
  3. Abstract

    This study explored how different formats of instructional visuals affect the accuracy of students' metacognitive judgments. Undergraduates (n = 133) studied a series of five biology texts and made judgments of learning. Students were randomly assigned to study the texts only (text-only group), study the texts with provided visuals (provided visuals group), study the texts and generate their own visuals (learner-generated visuals group), or study the texts and observe animations of instructor-generated visuals (instructor-generated visuals group). After studying the texts and making judgments of learning, all students completed multiple-choice comprehension tests on each text. The learner-generated and instructor-generated visuals groups exhibited significantly higher relative judgment accuracy than the text-only and provided visuals groups, though this effect was relatively small. The learner-generated visuals group also required more study time and was more likely to report the use of visual cues when making their judgments of learning.

     
  4. Abstract

    Undergraduates (n = 132) learned about the human respiratory system and then taught what they learned by explaining aloud on video. Following a 2 × 2 design, students either generated their own words or visuals on paper while explaining aloud, or they viewed instructor‐provided words or visuals while explaining aloud. One week after teaching, students completed explanation, drawing, and transfer tests. Teaching with provided or generated visualizations resulted in significantly higher transfer test performance than teaching with provided or generated words. Furthermore, teaching with provided visuals led to significantly higher drawing test performance than teaching with generated visuals. Finally, the number of elaborations in students' explanations during teaching did not significantly differ across groups but was significantly associated with subsequent explanation and transfer test performance. Overall, the findings partially support the hypothesis that visuals facilitate learning by explaining, yet the benefits appeared stronger for instructor‐provided visuals than learner‐generated drawings.

     